
    The Complexity of the Simplex Method

    The simplex method is a well-studied and widely used pivoting method for solving linear programs. When Dantzig originally formulated the simplex method, he gave a natural pivot rule that pivots into the basis a variable with the most violated reduced cost, i.e., the most negative reduced cost in a minimization problem. In their seminal work, Klee and Minty showed that this pivot rule takes exponential time in the worst case. We prove two main results on the simplex method. Firstly, we show that it is PSPACE-complete to find the solution that is computed by the simplex method using Dantzig's pivot rule. Secondly, we prove that deciding whether Dantzig's rule ever chooses a specific variable to enter the basis is PSPACE-complete. We use the known connection between Markov decision processes (MDPs) and linear programming, and an equivalence between Dantzig's pivot rule and a natural variant of policy iteration for average-reward MDPs. We construct MDPs and show PSPACE-completeness results for single-switch policy iteration, which in turn imply our main results for the simplex method.
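
    For orientation, here is a minimal sketch of Dantzig's entering-variable rule in isolation (not the paper's construction; the function name and tableau conventions are illustrative assumptions):

        import numpy as np

        def dantzig_entering_variable(reduced_costs, tol=1e-9):
            """Dantzig's pivot rule for a minimization LP: among variables with
            negative reduced cost, pick the one that is most negative.
            Returns None when no candidate exists (the basis is optimal)."""
            j = int(np.argmin(reduced_costs))
            return j if reduced_costs[j] < -tol else None

        # Variable 2 has the most negative reduced cost, so it enters the basis.
        print(dantzig_entering_variable(np.array([0.5, -1.0, -3.2, 0.0])))  # -> 2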

    Capturing player enjoyment in computer games

    The current state-of-the-art in intelligent game design using Artificial Intelligence (AI) techniques is mainly focused on generating human-like and intelligent characters. Even though complex opponent behaviors emerge through various machine learning techniques, there is generally no further analysis of whether these behaviors contribute to the satisfaction of the player. The implicit hypothesis motivating this research is that intelligent opponent behaviors enable the player to gain more satisfaction from the game. This hypothesis may well be true; however, since no notion of entertainment or enjoyment is explicitly defined, there is no evidence that a specific opponent behavior generates enjoyable games.

    The node-deletion problem for hereditary properties is NP-complete

    We consider the family of graph problems called node-deletion problems, defined as follows: for a fixed graph property Π, what is the minimum number of nodes which must be deleted from a given graph so that the resulting subgraph satisfies Π? We show that if Π is nontrivial and hereditary on induced subgraphs, then the node-deletion problem for Π is NP-complete for both undirected and directed graphs.
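
    To make the problem statement concrete, here is a brute-force sketch of the node-deletion number, with triangle-freeness as an example hereditary property (the encoding and names are illustrative assumptions, and the exponential running time is expected given NP-completeness):

        from itertools import combinations

        def is_triangle_free(nodes, adj):
            """Example hereditary property: the induced subgraph has no triangle."""
            return not any(v in adj[u] and w in adj[u] and w in adj[v]
                           for u, v, w in combinations(nodes, 3))

        def min_node_deletion(vertices, adj, holds):
            """Minimum number of nodes whose deletion makes `holds` true on the
            induced subgraph (brute force over all deletion sets by size)."""
            for k in range(len(vertices) + 1):
                for deleted in combinations(vertices, k):
                    kept = [v for v in vertices if v not in deleted]
                    if holds(kept, adj):
                        return k

        # K4: deleting any single node leaves a triangle, so the answer is 2.
        adj = {v: {u for u in range(4) if u != v} for v in range(4)}
        print(min_node_deletion(list(range(4)), adj, is_triangle_free))  # -> 2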

    Model Checking Probabilistic Pushdown Automata

    We consider the model checking problem for probabilistic pushdown automata (pPDA) and properties expressible in various probabilistic logics. We start with properties that can be formulated as instances of a generalized random walk problem. We prove that both qualitative and quantitative model checking for this class of properties and pPDA are decidable. Then we show that model checking for the qualitative fragment of the logic PCTL and pPDA is also decidable. Moreover, we develop an error-tolerant model checking algorithm for PCTL and the subclass of stateless pPDA. Finally, we consider the class of omega-regular properties and show that both qualitative and quantitative model checking for pPDA are decidable.
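
    The decidability results build on termination probabilities of pPDA, which arise as the least fixed point of a system of polynomial equations. Here is a minimal sketch of the standard Kleene fixed-point iteration on the simplest one-counter instance, a biased random walk (the numbers and function name are illustrative assumptions):

        def termination_prob(p_up, iters=10_000):
            """Least fixed point of x = (1 - p_up) + p_up * x**2: the probability
            that a one-counter random walk started at counter value 1 ever hits 0.
            Kleene iteration from 0 converges to the least nonnegative solution."""
            x = 0.0
            for _ in range(iters):
                x = (1.0 - p_up) + p_up * x * x
            return x

        # For p_up <= 1/2 the walk terminates almost surely; above 1/2 the
        # least solution is (1 - p_up) / p_up < 1.
        print(round(termination_prob(0.4), 6))  # -> 1.0
        print(round(termination_prob(0.6), 6))  # -> 0.666667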

    On the existence of 0/1 polytopes with high semidefinite extension complexity

    In a result of Rothvoß it was shown that there exists a 0/1 polytope (a polytope whose vertices lie in $\{0,1\}^n$) such that any higher-dimensional polytope projecting to it must have $2^{\Omega(n)}$ facets, i.e., its linear extension complexity is exponential. The question whether there exists a 0/1 polytope with high PSD extension complexity was left open. We answer this question in the affirmative by showing that there is a 0/1 polytope such that any spectrahedron projecting to it must be the intersection of a semidefinite cone of dimension $2^{\Omega(n)}$ and an affine space. Our proof relies on a new technique to rescale semidefinite factorizations.
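
    For reference, the quantity at stake can be stated as follows; this is the standard definition of semidefinite (PSD) extension complexity, paraphrased here rather than quoted from the paper:

        \[
          \operatorname{xc}_{\mathrm{PSD}}(P) \;=\; \min \bigl\{\, r \;:\;
            P = \pi\bigl(\mathbb{S}^r_+ \cap L\bigr)
            \text{ for some affine subspace } L
            \text{ and linear map } \pi \,\bigr\},
        \]

    where $\mathbb{S}^r_+$ is the cone of $r \times r$ positive semidefinite matrices. The main theorem exhibits a 0/1 polytope $P$ with $\operatorname{xc}_{\mathrm{PSD}}(P) = 2^{\Omega(n)}$.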

    On the Impact of Fair Best Response Dynamics

    In this work we completely characterize how the frequency with which each player participates in the game dynamics affects the possibility of reaching efficient states, i.e., states with an approximation ratio within a constant factor of the price of anarchy, within a polynomially bounded number of best responses. We focus on the well-known class of congestion games, and we show that if each player is allowed to play at least once and at most $\beta$ times in any $T$ best responses, states with approximation ratio $O(\beta)$ times the price of anarchy are reached after $T \lceil \log \log n \rceil$ best responses, and that such a bound is essentially tight even after exponentially many best responses. One important consequence of our result is that fairness among players is a necessary and sufficient condition for guaranteeing fast convergence to efficient states. This answers the important question of the maximum order of $\beta$ needed to reach efficient states quickly, left open by [9,10] and [3], in which fast convergence for constant $\beta$ and very slow convergence for $\beta = O(n)$ were shown, respectively. Finally, we show that the structure of the game implicitly affects its performance. In particular, we show that in the symmetric setting, in which all players share the same set of strategies, the game always converges to an efficient state after a polynomial number of best responses, regardless of the frequency with which each player moves.
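
    As a concrete illustration of the dynamics studied here, a minimal best-response loop for a small congestion game (the instance, the linear cost functions, and the round-robin schedule, i.e. a "fair" schedule with beta = 1, are illustrative assumptions):

        def best_response_dynamics(strategies, n_players, rounds=100):
            """Round-robin best responses in a congestion game where each
            resource costs its load (number of players using it). Every player
            moves once per round; stops once a pure Nash equilibrium is reached."""
            choice = [strategies[0]] * n_players
            for _ in range(rounds):
                moved = False
                for p in range(n_players):
                    load = {}  # resource usage by the other players
                    for q in range(n_players):
                        if q != p:
                            for r in choice[q]:
                                load[r] = load.get(r, 0) + 1
                    my_cost = lambda s: sum(load.get(r, 0) + 1 for r in s)
                    best = min(strategies, key=my_cost)
                    if my_cost(best) < my_cost(choice[p]):
                        choice[p], moved = best, True
                if not moved:
                    break  # pure Nash equilibrium
            return choice

        # Three players, two resources, singleton strategies: players spread out.
        print(best_response_dynamics([("a",), ("b",)], 3))  # -> [('b',), ('a',), ('a',)]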

    Sufficient Conditions for Tuza's Conjecture on Packing and Covering Triangles

    Given a simple graph $G=(V,E)$, a subset of $E$ is called a triangle cover if it intersects each triangle of $G$. Let $\nu_t(G)$ and $\tau_t(G)$ denote the maximum number of pairwise edge-disjoint triangles in $G$ and the minimum cardinality of a triangle cover of $G$, respectively. Tuza conjectured in 1981 that $\tau_t(G)/\nu_t(G) \le 2$ holds for every graph $G$. In this paper, using a hypergraph approach, we design polynomial-time combinatorial algorithms for finding small triangle covers. These algorithms imply new sufficient conditions for Tuza's conjecture on covering and packing triangles. More precisely, suppose that the set $\mathscr{T}_G$ of triangles covers all edges in $G$. We show that a triangle cover of $G$ with cardinality at most $2\nu_t(G)$ can be found in polynomial time if one of the following conditions is satisfied: (i) $\nu_t(G)/|\mathscr{T}_G| \ge \frac{1}{3}$, (ii) $\nu_t(G)/|E| \ge \frac{1}{4}$, (iii) $|E|/|\mathscr{T}_G| \ge 2$. Keywords: triangle cover, triangle packing, linear 3-uniform hypergraphs, combinatorial algorithm.
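
    Not the paper's algorithms, but a classic baseline in the same spirit: greedily pack edge-disjoint triangles and take all of their edges, which yields a cover of size at most $3\nu_t(G)$ against the conjectured $2\nu_t(G)$ (graph encoding and names are illustrative assumptions):

        from itertools import combinations

        def greedy_triangle_cover(vertices, edges):
            """Collect a maximal set of edge-disjoint triangles; the union of
            their edges meets every triangle of G (otherwise the packing could
            grow), so it is a triangle cover of size <= 3 * nu_t(G)."""
            edges = {frozenset(e) for e in edges}
            used, cover = set(), []
            for u, v, w in combinations(sorted(vertices), 3):
                tri = [frozenset((u, v)), frozenset((v, w)), frozenset((u, w))]
                if all(e in edges for e in tri) and not any(e in used for e in tri):
                    used.update(tri)
                    cover.extend(tri)
            return cover

        # K4 has four triangles; the three edges of one packed triangle cover them all.
        print(len(greedy_triangle_cover(range(4), combinations(range(4), 2))))  # -> 3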

    Quasi-Birth-Death Processes, Tree-Like QBDs, Probabilistic 1-Counter Automata, and Pushdown Systems

    We begin by observing that (discrete-time) Quasi-Birth-Death Processes (QBDs) are equivalent, in a precise sense, to probabilistic 1-Counter Automata (p1CAs), and both Tree-Like QBDs (TL-QBDs) and Tree-Structured QBDs (TS-QBDs) are equivalent to both probabilistic Pushdown Systems (pPDSs) and Recursive Markov Chains (RMCs). We then proceed to exploit these connections to obtain a number of new algorithmic upper and lower bounds for central computational problems about these models. Our main result is this: for an arbitrary QBD, we can approximate its termination probabilities (i.e., its $G$ matrix) to within $i$ bits of precision (i.e., within additive error $1/2^i$), in time polynomial in both the encoding size of the QBD and $i$, in the unit-cost rational arithmetic RAM model of computation. Specifically, we show that a decomposed Newton's method can be used to achieve this. We emphasize that this bound is very different from the well-known "linear/quadratic convergence" of numerical analysis, known for QBDs and TL-QBDs, which typically gives no constructive bound in terms of the encoding size of the system being solved. In fact, we observe (based on recent results) that for the more general TL-QBDs such a polynomial upper bound on Newton's method fails badly. Our upper bound proof for QBDs combines several ingredients: a detailed analysis of the structure of 1-counter automata, an iterative application of a classic condition number bound for errors in linear systems, and a very recent constructive bound on the performance of Newton's method for strongly connected monotone systems of polynomial equations. We show that the quantitative termination decision problem for QBDs (namely, "is $G_{u,v} \geq 1/2$?") is at least as hard as long-standing open problems in the complexity of exact numerical computation, specifically the square-root sum problem. On the other hand, it follows from our earlier results for RMCs that any non-trivial approximation of termination probabilities for TL-QBDs is square-root-sum-hard.
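
    The paper's decomposed Newton's method is considerably more involved; for orientation, here is the textbook functional iteration for the $G$ matrix of a discrete-time QBD with one-level-down, local, and one-level-up blocks A0, A1, A2 (the example blocks and names are illustrative assumptions):

        import numpy as np

        def qbd_G(A0, A1, A2, iters=5000):
            """Functional (Kleene) iteration for the minimal nonnegative solution
            of G = A0 + A1 @ G + A2 @ G @ G, where G[u, v] is the probability of
            first reaching the level below, starting in phase u and arriving in
            phase v. Converges only linearly; the paper instead bounds a
            decomposed Newton's method polynomially in the encoding size."""
            G = np.zeros_like(A0)
            for _ in range(iters):
                G = A0 + A1 @ G + A2 @ G @ G
            return G

        # Scalar case = a random walk: down 0.5, stay 0.1, up 0.4 per step.
        A0, A1, A2 = np.array([[0.5]]), np.array([[0.1]]), np.array([[0.4]])
        print(qbd_G(A0, A1, A2))  # -> [[1.]] since the downward drift dominates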

    Efficient Equilibria in Polymatrix Coordination Games

    We consider polymatrix coordination games with individual preferences, where every player corresponds to a node in a graph and plays a separate bimatrix game with non-negative symmetric payoffs with each neighbor. In this paper, we study $\alpha$-approximate $k$-equilibria of these games, i.e., outcomes where no group of at most $k$ players can deviate such that each member increases his payoff by at least a factor $\alpha$. We prove that for $\alpha \ge 2$ these games have the finite coalitional improvement property (and thus $\alpha$-approximate $k$-equilibria exist), while for $\alpha < 2$ this property does not hold. Further, we derive an almost tight bound of $2\alpha(n-1)/(k-1)$ on the price of anarchy, where $n$ is the number of players; in particular, it scales from unbounded for pure Nash equilibria ($k = 1$) to $2\alpha$ for strong equilibria ($k = n$). We also settle the complexity of several problems related to the verification and existence of these equilibria. Finally, we investigate natural means to reduce the inefficiency of Nash equilibria. Most promisingly, we show that by fixing the strategies of $k$ players the price of anarchy can be reduced to $n/k$ (and this bound is tight).
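
    A minimal sketch of the payoff structure (the instance and names are illustrative assumptions): each edge of the graph carries a symmetric bimatrix game, and a player's payoff is its individual preference plus the sum of the edge games with its neighbors.

        def payoff(p, strategy, edge_game, prefs, neighbors):
            """Payoff of player p in a polymatrix coordination game: individual
            preference for the chosen strategy plus one bimatrix payoff per
            neighbor (payoff matrices are symmetric, so key order is immaterial)."""
            return prefs[p][strategy[p]] + sum(
                edge_game[frozenset((p, q))][(strategy[p], strategy[q])]
                for q in neighbors[p])

        # Two players on one edge; coordinating on strategy 0 pays 2, else 0.
        edge_game = {frozenset((0, 1)): {(a, b): 2.0 if a == b == 0 else 0.0
                                         for a in (0, 1) for b in (0, 1)}}
        prefs = {0: {0: 0.0, 1: 1.0}, 1: {0: 0.0, 1: 1.0}}
        s = {0: 0, 1: 0}
        print([payoff(p, s, edge_game, prefs, {0: [1], 1: [0]}) for p in (0, 1)])  # [2.0, 2.0]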

    A New Lower Bound on the Maximum Number of Satisfied Clauses in Max-SAT and its Algorithmic Applications

    A pair of unit clauses is called conflicting if it is of the form $(x)$, $(\bar{x})$. A CNF formula is unit-conflict free (UCF) if it contains no pair of conflicting unit clauses. Lieberherr and Specker (J. ACM 28, 1981) showed that for each UCF CNF formula with $m$ clauses we can simultaneously satisfy at least $\hat{\varphi} m$ clauses, where $\hat{\varphi} = (\sqrt{5}-1)/2$. We improve the Lieberherr-Specker bound by showing that for each UCF CNF formula $F$ with $m$ clauses we can find, in polynomial time, a subformula $F'$ with $m'$ clauses such that we can simultaneously satisfy at least $\hat{\varphi} m + (1-\hat{\varphi})m' + (2-3\hat{\varphi})n''/2$ clauses (in $F$), where $n''$ is the number of variables in $F$ which are not in $F'$. We consider two parameterized versions of MAX-SAT, where the parameter is the number of satisfied clauses above the bounds $m/2$ and $m(\sqrt{5}-1)/2$. The former bound is tight for general formulas, and the latter is tight for UCF formulas. Mahajan and Raman (J. Algorithms 31, 1999) showed that every instance of the first parameterized problem can be transformed, in polynomial time, into an equivalent one with at most $6k+3$ variables and $10k$ clauses. We improve this to $4k$ variables and $(2\sqrt{5}+4)k$ clauses. Mahajan and Raman conjectured that the second parameterized problem is fixed-parameter tractable (FPT). We show that the problem is indeed FPT by describing a polynomial-time algorithm that transforms any problem instance into an equivalent one with at most $(7+3\sqrt{5})k$ variables. Our results are obtained using our improvement of the Lieberherr-Specker bound above.
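
    A brute-force sanity check of the Lieberherr-Specker guarantee on a small UCF formula (the clause encoding and the example formula are illustrative assumptions):

        from itertools import product

        PHI = (5 ** 0.5 - 1) / 2  # the Lieberherr-Specker constant, ~0.618

        def max_sat(clauses, n_vars):
            """Exhaustive MAX-SAT: the maximum number of simultaneously
            satisfied clauses. A literal is +i / -i for variable i (1-indexed)."""
            return max(
                sum(any((lit > 0) == assign[abs(lit) - 1] for lit in c) for c in clauses)
                for assign in product([False, True], repeat=n_vars))

        # A UCF formula (no conflicting pair of unit clauses), m = 5, n = 3.
        clauses = [(1,), (2, -3), (-1, 2), (-2, 3), (3,)]
        print(max_sat(clauses, 3), ">=", PHI * len(clauses))  # 5 >= 3.09...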